Introduction
This article treats some philosophical problems, giving definitions of belief,
wanting, ability, causality, and counterfactuals by means of some new tools:
%2ascriptive definitions%1, %2relatively meaningful sentences%1, and
%2automaton models%1. The aim is both to explicate the concepts and to explain
and illustrate the new tools.
	The origin of this work is in artificial intelligence, which imposes
the requirement that the definitions apply to machines, especially computer
programs, and that they be precise enough to be used by computer programs.
Many philosophers have argued that it is impossible to make satisfactory
definitions of concepts like belief and wanting that apply to machines, and
have argued from this to the conclusion that humans cannot be like machines.
The difficulty is that the traditional modes of definition, in terms of
behavior and structure, are unsuitable.
	%2Ascriptive definitions%1 of belief and wanting are most easily
given and understood for machines whose structure is discrete and completely
known. The definitions can then be extended to systems with several levels
of organization, such as people and groups of people, but the additional
notion of %2relatively meaningful sentence%1 is helpful here and perhaps
necessary.
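	As an illustration (ours, not taken from the article), the following
Python sketch ascribes a belief to a machine whose structure is discrete and
completely known; the thermostat-like automaton and all names in it are
hypothetical.

from dataclasses import dataclass

@dataclass
class Thermostat:
    """A two-state automaton whose structure is completely known."""
    setpoint: float
    heater_on: bool = False

    def step(self, sensed_temperature: float) -> None:
        # The entire mechanism: one comparison determines the next state.
        self.heater_on = sensed_temperature < self.setpoint

def believes_room_too_cold(machine: Thermostat) -> bool:
    # Ascriptive definition: the observer ascribes the belief "the room
    # is too cold" to the machine exactly when it is in the state that
    # makes it heat.  Nothing inside the machine need be a sentence.
    return machine.heater_on

t = Thermostat(setpoint=20.0)
t.step(sensed_temperature=17.5)
print(believes_room_too_cold(t))   # True: the belief is ascribed

The point of the sketch is only that the belief is something the observer
ascribes to the machine's known state, not something the machine contains.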
	The problems of reductionism are conveniently discussed in terms
of systems of automata with several levels of organization. It turns out
that statements of causality and counterfactual conditionals formulated
in terms of the higher levels of organization cannot be "reduced" to
statements at lower levels, even when the structure is entirely known.
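	The following sketch (again ours rather than the article's) shows the
kind of system meant by automata with several levels of organization: the
lower level is a ring of boolean cells with a completely known transition
rule, and the higher level describes the same system only by an aggregate
property.

import random

def low_level_step(cells: list[bool]) -> list[bool]:
    # Low level: each cell follows a known local rule, here the majority
    # of itself and its two neighbours on a ring.
    n = len(cells)
    return [sum((cells[i - 1], cells[i], cells[(i + 1) % n])) >= 2
            for i in range(n)]

def high_level_state(cells: list[bool]) -> str:
    # High level: a coarser description of the same system that forgets
    # which particular cells are on.
    return "mostly-on" if sum(cells) > len(cells) // 2 else "mostly-off"

cells = [random.random() < 0.6 for _ in range(15)]
for _ in range(5):
    print(high_level_state(cells), cells)
    cells = low_level_step(cells)

A causal or counterfactual statement such as "the system stayed mostly-on
because most cells started on" is phrased at the higher level; the claim
above is that such statements cannot in general be reduced to any particular
statement about individual cells, even though the cell-level rule is
entirely known.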
	Actually, all three techniques presented here have been used before.
In fact, we will argue that they are used in common-sense thinking. However,
this article may be the first to exhibit them explicitly.
A final section illustrates our claim that some philosophical puzzles
result from not recognizing these tools.